
    Artificial neural network-statistical approach for PET volume analysis and classification

    Copyright © 2012 The Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The increasing number of imaging studies and the prevailing application of positron emission tomography (PET) in clinical oncology have created a real need for efficient PET volume handling and for new volume analysis approaches to aid clinicians in diagnosis, treatment planning, and assessment of response to therapy. A novel automated system for oncological PET volume analysis is proposed in this work. The proposed intelligent system deploys two types of artificial neural networks (ANNs) for classifying PET volumes: the first is a competitive neural network (CNN), whereas the second is based on a learning vector quantisation neural network (LVQNN). Furthermore, the Bayesian information criterion (BIC) is used in this system to assess the optimal number of classes for each PET data set, helping the ANN blocks achieve accurate analysis by supplying the best number of classes. The system was evaluated using experimental phantom studies (NEMA IEC image quality body phantom), simulated PET studies using the Zubal phantom, and clinical studies representative of non-small cell lung cancer and pharyngolaryngeal squamous cell carcinoma. The proposed analysis methodology for clinical oncological PET data has shown promising results and can successfully classify and quantify malignant lesions. This study was supported by the Swiss National Science Foundation under Grant SNSF 31003A-125246, the Geneva Cancer League, and the Indo Swiss Joint Research Programme ISJRP 138866. This article has been made available through the Brunel Open Access Publishing Fund.
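The BIC-based class-count selection described above can be illustrated with a minimal sketch. Only the scoring step is shown; the mixture fitting that produces each model's log-likelihood is assumed to happen elsewhere, and the helper names are hypothetical, not the authors' code:

```python
import math

def bic(log_likelihood: float, n_params: int, n_samples: int) -> float:
    """Bayesian information criterion; lower values indicate a better
    trade-off between goodness of fit and model complexity."""
    return n_params * math.log(n_samples) - 2.0 * log_likelihood

def best_class_count(candidates):
    """Pick the class count k with the smallest BIC.

    `candidates` maps k -> (log_likelihood, n_params, n_samples)
    for a model already fitted with k classes."""
    return min(candidates, key=lambda k: bic(*candidates[k]))
```

In the system above, the selected k would then be handed to the CNN/LVQNN blocks as the number of classes to use.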

    Fully automated accurate patient positioning in computed tomography using anterior-posterior localizer images and a deep neural network: a dual-center study

    Objectives: This study aimed to improve patient positioning accuracy by relying on a CT localizer and a deep neural network to optimize image quality and radiation dose. Methods: We included 5754 chest CT axial and anterior–posterior (AP) images from two different centers, C1 and C2. After pre-processing, images were split into training (80%) and test (20%) datasets. A deep neural network was trained to generate 3D axial images from the AP localizer. The geometric centerlines of patient bodies were indicated by creating a bounding box on the predicted images. The distance between the body centerline estimated by the deep learning model and the ground truth (BCAP) was compared with patient mis-centering during manual positioning (BCMP). We also evaluated the performance of the model in terms of the distance between the lung centerline estimated by the deep learning model and the ground truth (LCAP). Results: The error in terms of BCAP was −0.75 ± 7.73 mm and 2.06 ± 10.61 mm for C1 and C2, respectively. This error was significantly lower than BCMP, which showed errors of 9.35 ± 14.94 mm and 13.98 ± 14.5 mm for C1 and C2, respectively. The absolute BCAP was 5.7 ± 5.26 mm and 8.26 ± 6.96 mm for C1 and C2, respectively. The LCAP metric was 1.56 ± 10.8 mm and −0.27 ± 16.29 mm for C1 and C2, respectively. The errors in terms of BCAP and LCAP were higher for larger patients (p value < 0.01). Conclusion: The accuracy of the proposed method was comparable to available alternative methods, with the advantage of being free from errors related to objects blocking camera visibility.
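Per slice, a BCAP-style metric reduces to comparing the bounding-box centers of a predicted and a ground-truth body mask and scaling by the pixel spacing. A minimal illustrative sketch, assuming binary masks as nested lists and a known in-plane pixel spacing (not the study's actual pipeline):

```python
def bbox_center_x(mask):
    """Horizontal center of the bounding box of a binary mask
    (mask: list of rows, each a list of 0/1 values)."""
    cols = [x for row in mask for x, v in enumerate(row) if v]
    if not cols:
        raise ValueError("empty mask")
    return (min(cols) + max(cols)) / 2.0

def centerline_error_mm(pred_mask, gt_mask, pixel_spacing_mm):
    """Signed mis-centering of the predicted body centerline, in mm."""
    return (bbox_center_x(pred_mask) - bbox_center_x(gt_mask)) * pixel_spacing_mm
```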

    Quantitative Analysis in Multimodality Imaging: Challenges and Opportunities

    This talk reflects the tremendous ongoing interest in molecular and dual-modality imaging (PET/CT, SPECT/CT and PET/MR) as both clinical and research imaging modalities over the past decade. An overview of molecular multi-modality medical imaging instrumentation as well as simulation, reconstruction, quantification, and related image processing issues, with special emphasis on quantitative analysis of nuclear medicine images, is presented. This tutorial aims to bring the biomedical image processing community a review of the state-of-the-art algorithms used and under development for accurate quantitative analysis in multimodality and multi-parametric molecular imaging, and their validation, mainly from the developer's perspective, with emphasis on image reconstruction and analysis techniques. It will inform the audience about a series of advanced developments recently carried out at the PET Instrumentation & Neuroimaging Lab of Geneva University Hospital and other active research groups. Current and prospective future applications of quantitative molecular imaging are also addressed, especially its use prior to therapy for dose distribution modeling and optimization of treatment volumes in external radiation therapy, and patient-specific 3D dosimetry in targeted therapy, toward the concept of image-guided radiation therapy.

    Multimodality molecular imaging: Paving the way for personalized medicine

    Early diagnosis and therapy increasingly operate at the cellular, molecular or even the genetic level. As diagnostic techniques transition from the systems to the molecular level, the role of multimodality molecular imaging becomes increasingly important. Positron emission tomography (PET), x-ray CT and MRI are powerful techniques for in vivo imaging. The inability of PET to provide anatomical information is a major limitation of standalone PET systems. Combining PET and CT proved to be clinically relevant and successfully reduced this limitation by providing the anatomical information required for localization of metabolic abnormalities. However, this technology still lacks the excellent soft-tissue contrast provided by MRI. Standalone MRI systems reveal structure and function, but cannot provide insight into the physiology and/or the pathology at the molecular level. The combination of PET and MRI, enabling truly simultaneous acquisition, bridges the gap between molecular and systems diagnosis. MRI and PET offer richly complementary functionality and sensitivity; fusion into a combined system offering simultaneous acquisition will capitalize on the strengths of each, providing a hybrid technology that is greatly superior to the sum of its parts. This talk also reflects the tremendous increase in interest in quantitative molecular imaging using PET as both a clinical and a research imaging modality over the past decade. It offers a brief overview of the entire range of quantitative PET imaging, from basic principles to the various steps required for obtaining quantitatively accurate data from dedicated standalone PET and combined PET/CT and PET/MR systems, including algorithms used to correct for physical degrading factors and to quantify tracer uptake and volume for radiation therapy treatment planning. Future opportunities and the challenges facing the adoption of multimodality imaging technologies and their role in biomedical research will also be addressed.

    Deep Learning-based calculation of patient size and attenuation surrogates from localizer Image: Toward personalized chest CT protocol optimization

    Purpose: Extracting the water equivalent diameter (DW), a good descriptor of patient size, from the CT localizer before the spiral scan not only minimizes truncation errors due to the limited scan field-of-view but also enables prior size-specific dose estimation as well as scan protocol optimization. This study proposed a unified methodology to measure patient size, shape, and attenuation parameters from a 2D anterior-posterior localizer image using deep learning algorithms, without the need for labor-intensive vendor-specific calibration procedures. Methods: 3D CT chest images and 2D localizers were collected for 4005 patients. A modified U-NET architecture was trained to predict the 3D CT images from their corresponding localizer scans. The algorithm was tested on 648 and 138 external cases with fixed and variable table height positions. To evaluate the performance of the prediction model, the structural similarity index measure (SSIM), body area, body contour, Dice index, and water equivalent diameter (DW) were calculated and compared between the predicted 3D CT images and the ground truth (GT) images in a slice-wise manner. Results: The average age of the patients included in this study (1827 male and 1554 female) was 53.8 ± 17.9 (18–120) years. The DW, tube current, and CTDIvol measured on original axial images in the external 138-case group were significantly larger than those of the external 648 cases (p < 0.05). The SSIM and Dice index calculated between the prediction and GT for the body contour were 0.998 ± 0.001 and 0.950 ± 0.016, respectively. The average percentage error in the calculation of DW was 2.7 ± 3.5%. The error in the DW calculation was more considerable in larger patients (p < 0.05). Conclusions: We developed a model to predict patient size, shape, and attenuation factors slice-by-slice prior to spiral scanning. The model exhibited remarkable robustness to table height variations. The estimated parameters are helpful for patient dose reduction and protocol optimization.
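The water equivalent diameter itself has a standard closed form (in the style of AAPM Report 220): the water-equivalent area of a slice is A_w = A_pixel · Σ(HU_i/1000 + 1) over pixels inside the body contour, and D_W = 2·√(A_w/π). A minimal sketch of that per-slice calculation, separate from the paper's deep-learning pipeline:

```python
import math

def water_equivalent_diameter(hu_values, pixel_area_mm2):
    """D_W (mm) for one axial slice, from the HU values inside the
    body contour and the in-plane pixel area in mm^2."""
    # each pixel contributes its water-equivalent area
    a_w = pixel_area_mm2 * sum(hu / 1000.0 + 1.0 for hu in hu_values)
    # diameter of the disk of water with the same attenuating area
    return 2.0 * math.sqrt(a_w / math.pi)
```

For a uniform water region (HU = 0) the formula reduces to the diameter of a disk with the same geometric area, which is a useful sanity check.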

    Real-time, acquisition parameter-free voxel-wise patient-specific Monte Carlo dose reconstruction in whole-body CT scanning using deep neural networks

    Objective: We propose a deep learning-guided approach to generate voxel-based absorbed dose maps from whole-body CT acquisitions. Methods: The voxel-wise dose maps corresponding to each source position/angle were calculated using Monte Carlo (MC) simulations considering patient- and scanner-specific characteristics (SP_MC). The dose distribution in a uniform cylinder was computed through MC calculations (SP_uniform). The density map and SP_uniform dose maps were fed into a residual deep neural network (DNN) to predict SP_MC through an image regression task. The whole-body dose maps reconstructed by the DNN and MC were compared in 11 test cases scanned with two tube voltages, through transfer learning, with and without tube current modulation (TCM). Voxel-wise and organ-wise dose evaluations were performed in terms of mean error (ME, mGy), mean absolute error (MAE, mGy), relative error (RE, %), and relative absolute error (RAE, %). Results: The model performance for the 120 kVp and TCM test set in terms of voxel-wise ME, MAE, RE, and RAE was −0.0302 ± 0.0244 mGy, 0.0854 ± 0.0279 mGy, −1.13 ± 1.41%, and 7.17 ± 0.44%, respectively. The organ-wise errors for the 120 kVp and TCM scenario, averaged over all segmented organs, were −0.144 ± 0.342 mGy, 0.23 ± 0.28 mGy, −1.11 ± 2.90%, and 2.34 ± 2.03% in terms of ME, MAE, RE, and RAE, respectively. Conclusion: Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level absorbed dose estimation. Clinical relevance statement: We proposed a novel method for voxel dose map calculation using deep neural networks. This work is clinically relevant since accurate dose calculation can be carried out for patients within acceptable computational time, compared to lengthy Monte Carlo calculations. Key points:
    • We proposed a deep neural network approach as an alternative to Monte Carlo dose calculation.
    • Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level dose estimation.
    • By generating a dose distribution from a single source position, our model can generate accurate and personalized dose maps for a wide range of acquisition parameters.
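The ME/MAE/RE/RAE figures quoted above follow from straightforward definitions. One plausible formulation, normalising the relative errors by the mean reference dose (the paper may define the normalisation differently, e.g. per voxel), is:

```python
def dose_error_metrics(pred, ref):
    """ME and MAE in the dose unit of the inputs (e.g. mGy), plus
    RE and RAE as percentages of the mean reference dose, computed
    over flattened voxel lists of equal length."""
    n = len(ref)
    me = sum(p - r for p, r in zip(pred, ref)) / n
    mae = sum(abs(p - r) for p, r in zip(pred, ref)) / n
    mean_ref = sum(ref) / n
    return me, mae, 100.0 * me / mean_ref, 100.0 * mae / mean_ref
```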

    Current status and new horizons in Monte Carlo simulation of X-ray CT scanners

    With the advent of powerful computers and parallel processing, including Grid technology, Monte Carlo (MC) techniques for radiation transport simulation have become the most popular method for modeling radiological imaging systems, particularly X-ray computed tomography (CT). The stochastic nature of the processes involved, such as X-ray photon generation, interaction with matter, and detection, makes MC the ideal tool for accurate modeling. MC calculations can be used to assess the impact of different physical design parameters on overall scanner performance, clinical image quality, and absorbed dose in CT examinations, which can be difficult or even impossible to estimate by experimental measurements and theoretical analysis. Simulations can also be used to develop and assess correction methods and reconstruction algorithms aimed at improving image quality and quantitative procedures. This paper focuses mainly on recent developments and future trends in X-ray CT MC modeling tools and their areas of application. An overview of existing programs and their useful features is given, together with recent developments in the design of computational anthropomorphic models of the human anatomy. It should be noted that, due to limited space, the references contained herein are for illustrative purposes and are not inclusive; no implication that those chosen are better than others not mentioned is intended.

    A Cone-Shaped Phantom for Assessment of Small Animal PET Scatter Fraction and Count Rate Performance

    Purpose: Positron emission tomography (PET) image quality deteriorates as object size increases, owing to increased detection of scattered and random events. The characterization of the scatter component in small animal PET imaging has received little attention owing to the small scatter fraction (SF) when imaging rodents. The purpose of this study is, first, to design and fabricate a cone-shaped phantom that can be used for measurement of object size-dependent SF and noise equivalent count rates (NECR), and second, to assess these parameters for two small animal PET scanners as a function of radial offset, object size, and lower energy threshold (LET). Methods: The X-PET™ and LabPET-8™ scanners were modeled as realistically as possible using the GATE Monte Carlo simulation platform. The simulation models were validated against experimental measurements in terms of sensitivity, SF, and NECR. The dedicated phantom was fabricated in-house from high-density polyethylene. The optimized dimensions of the cone-shaped phantom are 158 mm (length), 20 mm (minimum diameter), 70 mm (maximum diameter), and a taper angle of 9°. Results: The relative difference between simulated and experimental results for the LabPET-8™ scanner varied between 0.7% and 10%, except for a few results where it was below 16%. Depending on the radial offset from the center of the central axial field-of-view (3–6 cm diameter), the SF for the cone-shaped phantom varied from 26.3% to 18.2%, 18.6% to 13.1%, and 10.1% to 7.6% for the X-PET™, whereas it varied from 34.4% to 26.9%, 19.1% to 17.0%, and 9.1% to 7.3% for the LabPET-8™, for LETs of 250, 350, and 425 keV, respectively. The SF increases as the radial offset decreases, the LET decreases, and the object size increases. The SF is higher for the LabPET-8™ than for the X-PET™ scanner. The NECR increases as the radial offset increases and the object size decreases. The maximum NECR was obtained at a LET of 350 keV for the LabPET-8™ and 250 keV for the X-PET™. High correlation coefficients for SF and NECR were observed between the cone-shaped phantom and an equivalent-volume cylindrical phantom for the three considered axial fields of view. Conclusions: A single cone-shaped phantom enables assessment of the impact of three factors, namely radial offset, LET, and object size, on PET SF and count rate estimates. This phantom is more realistic than cylindrical uniform phantoms owing to the non-uniform shape of rodents' bodies, and seems well suited for evaluation of object size-dependent SF and NECR.
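The two quantities assessed by the phantom have standard definitions: the scatter fraction SF = S/(T + S) and the noise-equivalent count rate NECR = T²/(T + S + R), where T, S and R are the true, scattered, and random coincidence rates. A minimal sketch of these definitions (the 1×-randoms NECR variant is assumed; some protocols use 2R in the denominator), not the authors' phantom-processing code:

```python
def scatter_fraction(trues, scatter):
    """SF = S / (T + S), from true and scattered coincidence rates."""
    return scatter / (trues + scatter)

def necr(trues, scatter, randoms):
    """NECR = T^2 / (T + S + R): the true count rate penalised for the
    noise contributed by scattered and random coincidences."""
    return trues ** 2 / (trues + scatter + randoms)
```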